162 research outputs found

    Real-time terahertz imaging with a single-pixel detector

    Get PDF
    Terahertz (THz) radiation is poised to have an essential role in many imaging applications, from industrial inspection to medical diagnosis. However, commercialization is held back by impractical and expensive THz instrumentation. Single-pixel cameras have emerged as alternatives to multi-pixel cameras due to reduced costs and superior durability. Here, by optimizing the modulation geometry and the post-processing algorithms, we demonstrate the acquisition of a THz video (32 × 32 pixels at 6 frames per second), shown in real time, using a single-pixel fiber-coupled photoconductive THz detector. A laser diode combined with a digital micromirror device, shining visible light onto silicon, acts as the spatial THz modulator. We mathematically account for the temporal response of the system, reduce noise with a lock-in-free carrier-wave modulation, and realize quick, noise-robust image undersampling. Since our modifications neither impose intricate manufacturing, nor require long post-processing, nor sacrifice the time-resolving capabilities of THz spectrometers (their greatest asset), this work has the potential to serve as a foundation for all future single-pixel THz imaging systems.
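
    As a hypothetical illustration of the single-pixel principle described above (a sketch, not the authors' acquisition or post-processing code): a 32 × 32 scene is probed with a sequence of binary masks, each detector reading is the inner product of the scene with one mask, and inverting the mask transform recovers the image. Undersampling, as in the paper, would correspond to keeping only a subset of the masks together with a suitable reconstruction algorithm.

```python
# Minimal single-pixel imaging sketch (illustrative only, not the authors' code).
# A 32x32 scene is probed with +/-1 Hadamard masks (physical 0/1 DMD masks are
# usually handled differentially); each "detector reading" is the inner product
# of the scene with one mask, and the orthogonal Hadamard transform is inverted
# to recover the image.
import numpy as np
from scipy.linalg import hadamard

N = 32                                   # image side length
H = hadamard(N * N)                      # 1024 x 1024 Hadamard matrix, entries +/-1

scene = np.zeros((N, N))
scene[8:24, 12:20] = 1.0                 # toy object in the field of view
x = scene.ravel()

rng = np.random.default_rng(0)
y = H @ x + 0.01 * rng.standard_normal(N * N)   # one noisy scalar per mask

# H is orthogonal up to the factor N*N, so H.T / (N*N) inverts the acquisition.
recon = ((H.T @ y) / (N * N)).reshape(N, N)
print("peak reconstruction error:", np.max(np.abs(recon - scene)))
```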

    Using iterated rational filter banks within the ARSIS concept for producing 10 m Landsat multispectral images

    No full text
    The ARSIS concept is meant to increase the spatial resolution of an image, without modifying its spectral content, by merging structures extracted from a higher-resolution image of the same scene acquired in a different spectral band. It makes use of wavelet transforms and multiresolution analysis. It is currently applied operationally with dyadic wavelet transforms, which restrict the merging to images whose resolution ratio is a power of two. Nevertheless, provided certain conditions are met, rational discrete wavelet transforms can be numerically approximated by rational filter banks, which would enable a more general merging: in theory, the ratio of the resolutions of the images to merge can then be a power of a certain family of rational numbers. The aim of this article is to examine whether the use of these approximations of rational wavelet transforms is efficient within the ARSIS concept. The work relies on a particular case: the merging of a 10 m SPOT Panchromatic image and a 30 m Landsat Thematic Mapper multispectral image to synthesize a 10 m multispectral image called TM-HR.
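
    As a rough illustration of the dyadic case mentioned above (a sketch under simplifying assumptions, not the operational ARSIS implementation, which also models the relation between sub-bands of the two spectral bands): a single-level wavelet decomposition of the high-resolution image supplies the detail sub-bands, while the low-resolution spectral band supplies the approximation.

```python
# Rough ARSIS-like fusion sketch with a dyadic wavelet transform (illustrative
# only; the inter-band structure modelling of the actual method is omitted).
# Requires numpy and PyWavelets.
import numpy as np
import pywt

rng = np.random.default_rng(1)
pan = rng.random((256, 256))        # stand-in for a high-resolution panchromatic image
ms_lowres = rng.random((128, 128))  # stand-in for one low-resolution spectral band

# One dyadic decomposition level of the panchromatic image: keep its details.
_, (cH, cV, cD) = pywt.dwt2(pan, 'haar')

# Inject the panchromatic details around the low-resolution approximation
# (the factor 2 roughly matches the orthonormal 2-D Haar scaling of cA).
fused = pywt.idwt2((ms_lowres * 2.0, (cH, cV, cD)), 'haar')
print(fused.shape)                  # (256, 256): the spectral band at the pan resolution
```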

    Using iterated rational filter banks within the ARSIS method for producing 10 m Landsat multispectral images

    Get PDF
    The ARSIS concept makes use of wavelet transforms and multiresolution analysis to improve the spatial resolution of images, starting from a set made of a low-resolution image in the spectral band of interest and a high-resolution image in another spectral band. This paper deals with the use of rational filter banks within ARSIS. The concept was first applied operationally with dyadic wavelet transforms, which restrict the merging to images whose ratio of spatial resolutions equals a power of two. Provided certain conditions are met, rational filter banks can be seen as a good approximation of rational wavelet transforms and thus enable a more general merging of images with ARSIS. The advantages of these rational filter banks compared to other methods are discussed and illustrated by an example: the fusion of a 10 m SPOT Panchromatic image and a 30 m Landsat Thematic Mapper (TM) multispectral image into a synthetic 10 m multispectral image, called hereafter TM-HR.
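
    The 30 m to 10 m case corresponds to a resolution ratio of 3, which a dyadic scheme cannot reach but a rational (p/q) filter bank can. As a generic, hypothetical illustration of such a rational rate change (not the iterated rational filter banks designed in the paper), a polyphase resampler converts a line of 30 m samples into 10 m samples:

```python
# Generic rational-rate resampling sketch (illustrative only; not the specific
# iterated rational filter banks studied in the paper). The 30 m -> 10 m case
# corresponds to a rational resolution ratio of 3/1.
import numpy as np
from scipy.signal import resample_poly

row_30m = np.array([12., 15., 14., 18., 20., 17.])  # toy 30 m samples along one line
row_10m = resample_poly(row_30m, up=3, down=1)      # polyphase resampling by 3/1

print(len(row_30m), '->', len(row_10m))             # 6 -> 18 samples
```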

    A CURE for noisy magnetic resonance images: Chi-square unbiased risk estimation

    Full text link
    In this article we derive an unbiased expression for the expected mean-squared error associated with continuously differentiable estimators of the noncentrality parameter of a chi-square random variable. We then consider the task of denoising squared-magnitude magnetic resonance image data, which are well modeled as independent noncentral chi-square random variables on two degrees of freedom. We consider two broad classes of linearly parameterized shrinkage estimators that can be optimized using our risk estimate, one in the general context of undecimated filterbank transforms, and another in the specific case of the unnormalized Haar wavelet transform. The resulting algorithms are computationally tractable and improve upon state-of-the-art methods for both simulated and actual magnetic resonance image data.
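
    The data model invoked above is easy to reproduce numerically (a minimal sketch of the noise model only; the CURE risk expression and the shrinkage estimators of the paper are not shown): the squared magnitude of a complex Gaussian measurement is, after normalization by the noise variance, a noncentral chi-square variable on two degrees of freedom.

```python
# Sketch of the squared-magnitude MR data model (noise model only; the CURE
# estimator itself is not reproduced here).
import numpy as np

rng = np.random.default_rng(2)
sigma = 1.0
mu_re, mu_im = 3.0, 4.0                 # noise-free complex signal (magnitude 5)

# Complex Gaussian measurement: real and imaginary parts are independent Gaussians.
re = mu_re + sigma * rng.standard_normal(100_000)
im = mu_im + sigma * rng.standard_normal(100_000)

y = (re**2 + im**2) / sigma**2          # noncentral chi-square on 2 degrees of freedom
ncp = (mu_re**2 + mu_im**2) / sigma**2  # noncentrality parameter

# Sanity check: E[y] = 2 + noncentrality for a chi-square on 2 degrees of freedom.
print(y.mean(), "vs", 2 + ncp)
```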

    Sparse Image Restoration Using Iterated Linear Expansion of Thresholds

    Get PDF
    We focus on image restoration that consists of regularizing a quadratic data-fidelity term with the standard l1 sparsity-enforcing norm. We propose a novel algorithmic approach to solve this optimization problem. Our idea amounts to approximating the result of the restoration as a linear sum of basic thresholds (e.g., soft-thresholds) weighted by unknown coefficients. The few coefficients of this expansion are obtained by minimizing the equivalent low-dimensional l1-norm-regularized objective function, which can be solved efficiently with standard convex-optimization techniques, e.g., iteratively reweighted least squares (IRLS). By iterating this process, we claim that we reach the global minimum of the objective function. Experimentally, we find that very few iterations are required to reach convergence.
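
    A toy illustration of the expansion idea (hypothetical code, restricted to pure denoising, where the exact l1 minimizer is itself a soft-threshold, so this only shows the mechanics of the low-dimensional coefficient search): the estimate is written as a weighted sum of two soft-thresholds of the data, and the two weights are found by minimizing the l1-regularized objective.

```python
# Linear expansion of thresholds (LET) sketch for denoising (illustrative only).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
n = 1000
x_true = np.zeros(n)
x_true[rng.choice(n, 30, replace=False)] = 5 * rng.standard_normal(30)  # sparse signal
y = x_true + rng.standard_normal(n)                                     # noisy data
lam = 1.0                                                               # l1 weight

soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
basis = np.stack([soft(y, lam), soft(y, 2 * lam)], axis=1)  # two elementary thresholds

def objective(a):                      # l1-regularized objective restricted to the span
    x_hat = basis @ a
    return 0.5 * np.sum((x_hat - y) ** 2) + lam * np.sum(np.abs(x_hat))

a_opt = minimize(objective, x0=np.array([1.0, 0.0]), method='Nelder-Mead').x
x_hat = basis @ a_opt
print('weights:', a_opt)
```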

    An Iterative Linear Expansion of Thresholds for l1-based Image Restoration

    Get PDF
    This paper proposes a novel algorithmic framework to solve image-restoration problems under sparsity assumptions. As usual, the reconstructed image is the minimizer of an objective functional that consists of a data-fidelity term and an l1 regularization. However, instead of estimating the reconstructed image that minimizes the objective functional directly, we focus on the restoration process that maps the degraded measurements to the reconstruction. Our idea amounts to parameterizing this process as a linear combination of a few elementary thresholding functions (LET) and solving for the linear weighting coefficients by minimizing the objective functional. It is then possible to update the thresholding functions and to iterate this process (i-LET). The key advantage of such a linear parametrization is that the problem size is reduced dramatically: at each step we only need to solve an optimization problem over the dimension of the linear coefficients (typically fewer than 10) instead of the whole image dimension. With elementary thresholding functions satisfying certain constraints, global convergence of the iterated LET algorithm is guaranteed. Experiments on several test images, over a wide range of noise levels and different types of convolution kernels, clearly indicate that the proposed framework usually outperforms state-of-the-art algorithms in terms of both CPU time and number of iterations.
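
    A sketch of how such an iteration could look for 1-D deconvolution (an illustrative, hypothetical construction that follows the description above, not the paper's exact choice of elementary thresholding functions): at each step a small basis containing the current estimate and soft-thresholded gradient steps is formed, and the objective is minimized over the three expansion coefficients only.

```python
# i-LET-style iteration sketch for 1-D sparse deconvolution (illustrative only).
import numpy as np
from scipy.optimize import minimize
from scipy.ndimage import uniform_filter1d

rng = np.random.default_rng(4)
n = 512
x_true = np.zeros(n)
x_true[rng.choice(n, 15, replace=False)] = 5.0
H = lambda v: uniform_filter1d(v, size=5)     # symmetric blur, treated as self-adjoint
y = H(x_true) + 0.05 * rng.standard_normal(n)
lam = 0.05
soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def F(x):                                     # l1-regularized objective
    return 0.5 * np.sum((H(x) - y) ** 2) + lam * np.sum(np.abs(x))

x_k = np.zeros(n)
for it in range(10):
    grad_step = x_k + H(y - H(x_k))           # Landweber / gradient step
    B = np.stack([x_k, soft(grad_step, lam), soft(grad_step, 3 * lam)], axis=1)
    a = minimize(lambda a: F(B @ a), x0=np.array([1., 0., 0.]),
                 method='Nelder-Mead').x      # solve only for the 3 expansion weights
    x_k = B @ a
    print(it, F(x_k))
```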

    Self-Similarity: Part II—Optimal Estimation of Fractal Processes

    Full text link

    Sampling signals with finite rate of innovation

    Get PDF
    Consider classes of signals that have a finite number of degrees of freedom per unit of time, and call this number the rate of innovation. Examples of signals with a finite rate of innovation include streams of Diracs (e.g., the Poisson process), nonuniform splines, and piecewise polynomials. Even though these signals are not bandlimited, we show that they can be sampled uniformly at (or above) the rate of innovation using an appropriate kernel and then be perfectly reconstructed. Thus, we prove sampling theorems for classes of signals and kernels that generalize the classic "bandlimited and sinc kernel" case. In particular, we show how to sample and reconstruct periodic and finite-length streams of Diracs, nonuniform splines, and piecewise polynomials using sinc and Gaussian kernels. For infinite-length signals with a finite local rate of innovation, we show local sampling and reconstruction based on spline kernels. The key in all constructions is to identify the innovative part of a signal (e.g., the time instants and weights of Diracs) using an annihilating or locator filter: a device well known in spectral analysis and error-correction coding. This leads to standard computational procedures for solving the sampling problem, which we illustrate through experimental results. Applications of these new sampling results can be found in signal processing, communications systems, and biological systems.
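
    The annihilating-filter step mentioned above can be sketched in a few lines (a toy example, not the paper's full sampling pipeline with sinc or Gaussian kernels): given 2K+1 Fourier-series coefficients of a periodic stream of K Diracs, a filter that annihilates the coefficient sequence reveals the locations through its roots, and the weights then follow from a small linear system.

```python
# Annihilating-filter recovery of a periodic stream of Diracs from 2K+1
# Fourier-series coefficients (toy sketch of the core FRI mechanism).
import numpy as np

tau = 1.0                                     # period
t_k = np.array([0.12, 0.47, 0.83])            # Dirac locations (to be recovered)
c_k = np.array([1.0, -0.5, 2.0])              # Dirac weights (to be recovered)
K = len(t_k)

# Fourier-series coefficients X[m] = (1/tau) * sum_k c_k exp(-2j*pi*m*t_k/tau)
m = np.arange(2 * K + 1)
X = np.exp(-2j * np.pi * np.outer(m, t_k) / tau) @ c_k / tau

# Annihilating filter A (length K+1): (A * X)[m] = 0. Build the Toeplitz system
# and take its null vector via the SVD.
T = np.array([[X[i + K - j] for j in range(K + 1)] for i in range(K)])
A = np.linalg.svd(T)[2][-1].conj()

# The roots of A are u_k = exp(-2j*pi*t_k/tau)  ->  locations t_k.
u = np.roots(A)
t_hat = np.sort((-np.angle(u) * tau / (2 * np.pi)) % tau)

# Weights from a Vandermonde system on the first K coefficients.
V = np.exp(-2j * np.pi * np.outer(m[:K], t_hat) / tau) / tau
c_hat = np.real(np.linalg.solve(V, X[:K]))

print('locations:', t_hat)
print('weights  :', c_hat)
```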

    A sampling theorem for periodic piecewise polynomial signals

    Get PDF
    We consider the problem of sampling signals which are not bandlimited but still have a finite number of degrees of freedom per unit of time, such as piecewise polynomials. We demonstrate that, by using an adequate sampling kernel and a sampling rate greater than or equal to the number of degrees of freedom per unit of time, one can uniquely reconstruct such signals. This proves a sampling theorem for a wide class of signals beyond bandlimited ones. Applications of this sampling theorem can be found in signal processing, communication systems, and biological systems.
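
    For context, the standard reduction that makes such signals tractable (a sketch consistent with the finite-rate-of-innovation framework above, not a verbatim statement from the paper) is that differentiation removes the polynomial pieces and leaves only innovations at the transition instants:

```latex
% If x(t) is piecewise polynomial of maximum degree R with transition instants t_k,
% then R+1 derivatives (in the distributional sense) leave only Diracs and their
% derivatives at the t_k:
\[
  x^{(R+1)}(t) \;=\; \sum_{k} \sum_{r=0}^{R} a_{k,r}\, \delta^{(r)}(t - t_k),
\]
% so the number of free parameters per period is finite and the annihilating-filter
% machinery developed for streams of Diracs can be applied to x^{(R+1)}.
```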